Mechanistic simulators are an indispensable tool in epidemiology for exploring the behavior of complex, dynamic infections under varying conditions and for navigating uncertain environments. ODE-based models are the dominant paradigm: they enable fast simulation and are amenable to gradient-based optimization, but they make simplifying assumptions about population homogeneity. Agent-based models (ABMs) are an increasingly popular alternative paradigm that can represent the heterogeneity of contact interactions in granular detail and capture the agency of individual behavior. However, conventional ABM frameworks are not differentiable and pose scalability challenges, making it non-trivial to connect them to auxiliary data sources. In this paper, we introduce GradABM, a novel scalable, fast, and differentiable design for ABMs. GradABM runs simulations in a few seconds on commodity hardware and enables fast forward simulation and differentiable backward simulation. This makes it possible to merge it with deep neural networks and seamlessly integrate heterogeneous data sources, aiding calibration, forecasting, and policy evaluation. We demonstrate the efficacy of GradABM via extensive experiments on real COVID-19 and influenza datasets. We are optimistic that this work will bring the ABM and AI communities closer together.
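The abstract stays at a high level; as a rough illustration only, the following PyTorch sketch shows what a differentiable epidemic simulation step with gradient-based calibration can look like. It is not GradABM's actual design: the soft expected-value transition, the fixed contact matrix, and all parameter names and numbers are assumptions made here for illustration.

import torch

def differentiable_epidemic_step(susceptible, infected, contacts, beta, gamma):
    # One soft simulation step: infections flow through a differentiable
    # expected-value relaxation instead of hard Bernoulli draws, so
    # gradients can propagate back to the epidemiological parameters.
    exposure = contacts @ infected                  # infectious contacts per agent
    p_infect = 1.0 - torch.exp(-beta * exposure)    # per-agent infection probability
    new_infections = susceptible * p_infect
    susceptible = susceptible - new_infections
    infected = infected + new_infections - gamma * infected
    return susceptible, infected

# Toy calibration of the transmissibility parameter against observed counts.
n = 1000
contacts = (torch.rand(n, n) < 0.01).float()        # hypothetical contact network
beta = torch.tensor(0.05, requires_grad=True)
gamma = torch.tensor(0.1)
observed = torch.tensor([12.0, 18.0, 26.0])         # hypothetical daily case counts

s = torch.ones(n)
i = torch.zeros(n); i[:5] = 1.0                     # seed five initial infections
s = s - i
loss = 0.0
for t in range(3):
    s, i = differentiable_epidemic_step(s, i, contacts, beta, gamma)
    loss = loss + (i.sum() - observed[t]) ** 2
loss.backward()                                     # gradient reaches beta through the simulation
torch.optim.Adam([beta], lr=1e-2).step()

Agent-level stochasticity can be retained in such designs via reparameterized relaxations (e.g., Gumbel-Softmax) while keeping the backward pass differentiable.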
The COVID-19 pandemic has highlighted the importance of epidemic forecasting for decision makers in multiple domains, from public health to the economy as a whole. While forecasting epidemic progression is often conceptualized as analogous to weather forecasting, it has several key differences and remains a non-trivial task. The spread of disease is subject to multiple confounding factors spanning human behavior, pathogen dynamics, and weather and environmental conditions. Research interest has grown with the increased availability of rich data sources capturing previously unobservable facets, as well as initiatives from government public health and funding agencies. This has spurred, in particular, a spate of work on "data-centered" solutions that have shown the potential to enhance our forecasting capabilities by leveraging non-traditional data sources together with recent innovations in AI and machine learning. This survey examines various data-driven methodological and practical advances and presents a conceptual framework for navigating them. First, we enumerate the large number of epidemiological datasets and novel data streams relevant to epidemic forecasting, capturing factors such as symptomatic online surveys, retail and commerce, mobility, and genomics data. Next, we discuss methods and modeling paradigms, focusing on recent data-driven statistical and deep-learning approaches as well as the novel class of hybrid models that combine the domain knowledge of mechanistic models with the effectiveness and flexibility of statistical methods. We also discuss the experiences and challenges that arise in real-world deployment of these forecasting systems, including decision-making informed by the forecasts. Finally, we highlight some challenges and open problems found across the forecasting pipeline.
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting in which the goal is to model and forecast multivariate time-series that have underlying hierarchical relations. Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions. Recent state-of-the-art probabilistic forecasting methods also impose hierarchical relations only on point predictions and samples of the distribution, which does not account for the coherency of the forecast distributions. Previous works also silently assume that datasets are always consistent with the given hierarchical relations and do not adapt to real-world datasets that deviate from this assumption. We close both gaps and propose PROFHIT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distributions of the entire hierarchy. PROFHIT uses a flexible probabilistic Bayesian approach and introduces a novel distributional coherency regularization that learns from the hierarchical relations over the entire forecast distribution, enabling robust and calibrated forecasts that adapt to datasets of varying hierarchical consistency. Evaluating PROFHIT over a wide range of datasets, we observe 41-88% better performance in accuracy and calibration. Because it models coherency over the full distribution, PROFHIT provides reliable forecasts even when up to 10% of the input time-series data is missing, whereas the performance of other methods degrades severely by over 70%.
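The abstract does not spell out the regularizer; below is one plausible instantiation of a distributional coherency penalty, sketched under the assumptions of Gaussian forecast distributions and sum-aggregation in the hierarchy. Both assumptions, and all names, are mine for illustration, not necessarily PROFHIT's parameterization.

import torch

def coherency_regularizer(mu, sigma, children):
    # Penalize divergence between each parent's forecast distribution and
    # the aggregate of its children's distributions (independent-Gaussian
    # assumption: means and variances add under summation).
    reg = 0.0
    for parent, kids in children.items():
        agg_mu = mu[kids].sum()
        agg_var = (sigma[kids] ** 2).sum()
        var_p = sigma[parent] ** 2
        # KL( N(agg_mu, agg_var) || N(mu[parent], var_p) )
        kl = 0.5 * (agg_var / var_p
                    + (mu[parent] - agg_mu) ** 2 / var_p
                    - 1.0
                    + torch.log(var_p / agg_var))
        reg = reg + kl
    return reg

# Toy hierarchy: node 0 is the total, nodes 1 and 2 are its children.
mu = torch.tensor([10.0, 4.0, 5.5], requires_grad=True)
sigma = torch.tensor([1.0, 0.5, 0.5], requires_grad=True)
loss = coherency_regularizer(mu, sigma, {0: [1, 2]})
loss.backward()   # pulls mu[0] toward mu[1] + mu[2] and aligns the variances

Weighting such a term against the likelihood loss is what would let a model trade off data fit against hierarchy consistency on datasets that only approximately obey the hierarchy.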
Accurate and reliable epidemic forecasting is an important problem that impacts public health planning and disease mitigation. Most existing epidemic forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions. Recent works on neural models for uncertainty-aware time-series forecasting also have several limitations; for instance, it is hard to specify meaningful priors in Bayesian neural networks, while deep-ensembling approaches are computationally expensive in practice. In this paper, we fill this important gap. We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EpiFNP, which directly models the probability density of the forecast values. EpiFNP leverages a dynamic stochastic correlation graph to model the correlations between sequences in a non-parametric way, and designs different stochastic latent variables to capture functional uncertainty from different perspectives. Our extensive experiments in a real-time influenza forecasting setting show that EpiFNP significantly outperforms previous state-of-the-art models in both accuracy and calibration metrics, by up to 2.5x in accuracy and 2.4x in calibration. Moreover, owing to the properties of its generative process, EpiFNP learns the relations between the current season and similar patterns in historical seasons, enabling interpretable forecasts. Beyond epidemic forecasting, EpiFNP can be of independent interest for advancing principled uncertainty quantification in deep sequential models for predictive analytics.
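As a rough illustration of the functional-neural-process idea, a stochastically sampled correlation graph over reference sequences feeding a full predictive density, here is a minimal sketch. The architecture, dimensions, and the RelaxedBernoulli edge sampler are illustrative stand-ins, not EpiFNP's actual components.

import torch
import torch.nn as nn

class TinyFNP(nn.Module):
    # Minimal functional-neural-process-style forecaster: the current
    # season attends over encoded historical seasons through stochastically
    # sampled edges, and the decoder emits a full predictive Gaussian
    # rather than a point forecast.
    def __init__(self, d=32):
        super().__init__()
        self.encoder = nn.GRU(1, d, batch_first=True)
        self.head = nn.Linear(2 * d, 2)   # predictive mean and log-variance

    def forward(self, query, references):
        _, hq = self.encoder(query)                      # (1, 1, d)
        _, hr = self.encoder(references)                 # (1, R, d)
        hq, hr = hq[0, 0], hr[0]                         # (d,), (R, d)
        # Stochastic correlation "graph": relaxed Bernoulli edges between
        # the current sequence and each reference sequence.
        edge_logits = hr @ hq / hq.shape[0] ** 0.5
        edges = torch.distributions.RelaxedBernoulli(
            torch.tensor(0.5), logits=edge_logits).rsample()   # (R,)
        z = (edges.unsqueeze(-1) * hr).mean(dim=0)             # latent summary
        mean, logvar = self.head(torch.cat([hq, z])).unbind(-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * logvar))

# Usage: the output is a predictive density, not just a number.
model = TinyFNP()
query = torch.randn(1, 20, 1)        # current season (20 weeks, toy data)
refs = torch.randn(8, 20, 1)         # 8 historical seasons
dist = model(query, refs)
print(dist.mean.item(), dist.stddev.item())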
In recent years, deep learning has seen increasing use in histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide Images under domain shift, using the H&E-stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work to compare the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, and Test-Time Data Augmentation, as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration, and that Test-Time Data Augmentation can be a promising alternative when an appropriate set of augmentations is chosen. Across methods, rejecting the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation for histopathological data.
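For concreteness, here are minimal sketches of two of the compared techniques, Monte-Carlo Dropout and Test-Time Data Augmentation, plus uncertainty-based tile rejection. The `model` and `augmentations` objects are placeholders; the paper's actual evaluation protocol is more involved.

import torch

def mc_dropout_predict(model, x, n_samples=20):
    # Monte-Carlo Dropout: keep only dropout layers stochastic at test time
    # and treat the spread of repeated forward passes as uncertainty.
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)    # predictive mean and uncertainty

def tta_predict(model, x, augmentations):
    # Test-Time Data Augmentation: average predictions over augmented views.
    model.eval()
    with torch.no_grad():
        probs = torch.stack([model(aug(x)).softmax(-1) for aug in augmentations])
    return probs.mean(0), probs.std(0)

def reject_most_uncertain(mean_probs, uncertainty, keep_fraction=0.9):
    # Keep the tiles whose predictions vary least across samples/views;
    # accuracy is then computed on the retained fraction only.
    k = int(keep_fraction * uncertainty.shape[0])
    keep = uncertainty.max(-1).values.argsort()[:k]
    return mean_probs[keep], keep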
Charisma is considered to be one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence (AI) perspective in endowing machines with such a skill. Beyond that, a plethora of use cases opens up for the computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels of these dimensions for humanoid robots or virtual agents seems accomplishable. Moreover, automatic measurement also appears quite feasible given the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can both appear charismatic and analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and its behavioural cues. We then turn to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before turning to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is that different datasets provide different skeleton formats, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how best to supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
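A minimal sketch of the affine-combining idea as described above: latent keypoints are affine combinations (weights summing to 1) of the input joints, which makes the mapping translation-equivariant. Joint counts, initialization, and the plain reconstruction loss are illustrative choices here, not the authors' exact setup, which additionally uses the latent points for cross-dataset consistency regularization.

import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    # Each latent point (and each reconstructed joint) is an affine
    # combination of the points in the previous set: rows of the weight
    # matrices are normalized to sum to 1.
    def __init__(self, n_joints=122, n_latent=32):
        super().__init__()
        self.enc = nn.Parameter(torch.rand(n_latent, n_joints))
        self.dec = nn.Parameter(torch.rand(n_joints, n_latent))

    @staticmethod
    def _affine(weights):
        # Normalizing rows to sum to 1 yields affine combinations, which
        # are equivariant to translations of the input point cloud.
        return weights / weights.sum(dim=-1, keepdim=True)

    def forward(self, joints):                        # joints: (batch, n_joints, 3)
        latent = self._affine(self.enc) @ joints      # (batch, n_latent, 3)
        recon = self._affine(self.dec) @ latent       # (batch, n_joints, 3)
        return latent, recon

model = AffineCombiningAutoencoder()
joints = torch.randn(4, 122, 3)                       # toy batch of pooled skeletons
latent, recon = model(joints)
loss = (recon - joints).square().mean()               # reconstruction objective
loss.backward()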
This article concerns Bayesian inference using deep linear networks with output dimension one. In the interpolating (zero noise) regime we show that with Gaussian weight priors and MSE negative log-likelihood loss both the predictive posterior and the Bayesian model evidence can be written in closed form in terms of a class of meromorphic special functions called Meijer-G functions. These results are non-asymptotic and hold for any training dataset, network depth, and hidden layer widths, giving exact solutions to Bayesian interpolation using a deep Gaussian process with a Euclidean covariance at each layer. Through novel asymptotic expansions of Meijer-G functions, a rich new picture of the role of depth emerges. Specifically, we find that the posteriors in deep linear networks with data-independent priors are the same as in shallow networks with evidence maximizing data-dependent priors. In this sense, deep linear networks make provably optimal predictions. We also prove that, starting from data-agnostic priors, Bayesian model evidence in wide networks is only maximized at infinite depth. This gives a principled reason to prefer deeper networks (at least in the linear case). Finally, our results show that with data-agnostic priors a novel notion of effective depth given by \[\#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}}\] determines the Bayesian posterior in wide linear networks, giving rigorous new scaling laws for generalization error.
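To make the scaling law concrete, here is a worked instance of the effective-depth formula with purely illustrative numbers: for 16 hidden layers, 8000 training points, and width 2000, \[ d_{\mathrm{eff}} \;=\; \#\text{hidden layers}\times\frac{\#\text{training data}}{\text{network width}} \;=\; 16 \times \frac{8000}{2000} \;=\; 64 . \] Under the result quoted above, a 32-layer network of width 4000 trained on the same 8000 points would have the same effective depth ($32 \times 8000/4000 = 64$) and hence induce the same Bayesian posterior in the wide regime.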
In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log 1/\epsilon)$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are the condition numbers with respect to the variable blocks $x$ and $y$. We propose a new algorithm that requires only $\mathcal{O}(\sqrt{\kappa_x} \log 1/\epsilon)$ computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log 1/\epsilon)$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and the computation of $\nabla_y f(x,y)$ is significantly cheaper than the computation of $\nabla_x f(x,y)$; in this case, our algorithm substantially outperforms the existing state-of-the-art methods.
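For concreteness, instantiating the stated complexities with illustrative condition numbers $\kappa_x = 10^4$ and $\kappa_y = 10^2$ (ignoring constants and the common $\log 1/\epsilon$ factor): existing optimal methods use $\sqrt{\max\{\kappa_x,\kappa_y\}} = 100$ computations of both $\nabla_x f$ and $\nabla_y f$, whereas the proposed algorithm uses $\sqrt{\kappa_x} = 100$ computations of $\nabla_x f$ but only $\sqrt{\kappa_y} = 10$ computations of $\nabla_y f$, a tenfold reduction in calls to the $y$-gradient oracle.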
This paper presents a solution to the GenChal 2022 shared task on feedback comment generation for writing learning. In this task, given a text with an error and the span of that error, a system generates an explanatory note that helps the writer (a language learner) improve their writing skills. Our solution is based on fine-tuning the T5 model on the initial dataset, augmented according to the syntactic dependencies of the words located within the indicated error span. The solution of our team "nigula" obtained second place in the organizers' manual evaluation.
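A minimal sketch of such a fine-tuning setup in HuggingFace Transformers follows. The `<err>` span-marking scheme, the `generate feedback:` prefix, and the example pair are hypothetical stand-ins, since the exact input encoding and the dependency-based augmentation are not detailed in this summary.

from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical encoding: the error span is marked inline so the model
# can condition on it when generating the explanatory note.
source = "generate feedback: I am interested <err> on </err> science."
target = "The adjective 'interested' takes the preposition 'in', not 'on'."

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss    # standard seq2seq fine-tuning loss
loss.backward()

# At inference time, feedback comments are decoded from the marked text.
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))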